In the past few years, the detection of adversarial examples has been a hot topic, due to its importance for safely deploying machine learning algorithms in critical applications. However, detection methods are usually validated by assuming a single, implicitly known attack strategy, which does not necessarily account for real-life threats. Indeed, this can lead to an overestimation of a detector's performance and may introduce bias into comparisons between competing detection schemes. To overcome this limitation, we propose a novel multi-armed framework, called MEAD, for evaluating detectors against several attack strategies. Among them, we make use of three new objectives to generate attacks. The proposed performance metric is based on the worst-case scenario: detection is successful only if all the different attacks are correctly recognized. Empirically, we show the effectiveness of our approach. Moreover, the poor performance obtained by state-of-the-art detectors opens up a new and exciting line of research.
translated by Google Translate
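The worst-case metric described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the attack names and the function are hypothetical, but the aggregation rule follows the stated definition: a sample counts as detected only if every attack generated from it is flagged.

```python
def worst_case_detection(flags_per_attack):
    """flags_per_attack: dict mapping attack name -> list of booleans,
    one per test sample (True = detector flagged the attacked sample).
    Returns the worst-case detection rate: the fraction of samples
    for which *all* attacks were correctly recognized."""
    attacks = list(flags_per_attack.values())
    n_samples = len(attacks[0])
    detected = [all(flags[i] for flags in attacks) for i in range(n_samples)]
    return sum(detected) / n_samples

# Toy example: three attack strategies evaluated on four samples.
flags = {
    "pgd":    [True, True, False, True],
    "fgsm":   [True, True, True,  True],
    "custom": [True, False, True, True],
}
rate = worst_case_detection(flags)  # only samples 0 and 3 survive all attacks
```

Note how the worst-case rate (0.5 here) is lower than any single attack's detection rate, which is exactly the over-optimism the framework is designed to expose.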
Adversarial robustness has become a topic of growing interest in machine learning, following the observation that neural networks tend to be brittle. We propose an information-geometric formulation of adversarial defense and introduce FIRE, a new Fisher-Rao regularization for the categorical cross-entropy loss, based on the geodesic distance between the softmax outputs corresponding to natural and perturbed input features. Building on the information-geometric properties of the class of softmax distributions, we provide an explicit characterization of the Fisher-Rao distance (FRD) for the binary and multi-class cases, and derive some interesting properties as well as connections with standard regularization metrics. Furthermore, for a simple linear and Gaussian model, we show that all Pareto-optimal points in the accuracy-robustness region can be reached by FIRE, whereas other state-of-the-art methods fail. Empirically, we evaluate the performance of various classifiers trained with the proposed loss on standard datasets, observing simultaneous improvements of up to 1% in both clean and robust performance, while reducing training time by 20% over the best-performing methods.
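For reference alongside the abstract above, the Fisher-Rao distance between two categorical (softmax) distributions has a standard closed form, obtained by mapping the probability simplex onto a sphere of radius 2 via $p \mapsto 2\sqrt{p}$; this is the classical result for the Fisher information geometry of the multinomial family, not necessarily the exact characterization used in the paper:

```latex
d_{\mathrm{FR}}(p, q) \;=\; 2 \arccos\!\left( \sum_{k=1}^{K} \sqrt{p_k \, q_k} \right),
```

where $p = (p_1, \dots, p_K)$ and $q = (q_1, \dots, q_K)$ are the softmax outputs for the natural and perturbed inputs. The sum inside the arccosine is the Bhattacharyya coefficient, so the FRD is a monotone function of a familiar similarity measure.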
We propose an efficient and generative augmentation approach to address the scarcity of underwater debris data for visual detection. We use CycleGAN as a data augmentation technique to convert openly available, abundant data of terrestrial plastic into underwater-style images. Prior works focus only on augmenting or enhancing existing data, which can additionally bias the dataset. In contrast, our technique introduces variation by transforming additional in-air plastic data into the marine background. We also propose a novel architecture for underwater debris detection using an attention mechanism. Our method helps the detector focus only on relevant instances in the image, thereby enhancing its performance, which is particularly valuable when detecting marine debris with an Autonomous Underwater Vehicle (AUV). We perform extensive experiments for marine debris detection using our approach. Quantitative and qualitative results demonstrate the potential of our framework, which significantly outperforms state-of-the-art methods.
Photo-identification (photo-id) is one of the main non-invasive capture-recapture methods utilised by marine researchers for monitoring cetacean (dolphin, whale, and porpoise) populations. This method has historically been performed manually, resulting in high workload and cost due to the vast number of images collected. Recently, automated aids have been developed to help speed up photo-id, although they are often disjoint in their processing and do not utilise all available identifying information. Work presented in this paper aims to create a fully automatic photo-id aid capable of providing most likely matches based on all available information, without the need for data pre-processing such as cropping. This is achieved through a pipeline of computer vision models and post-processing techniques aimed at detecting cetaceans in unedited field imagery before passing them downstream for individual-level catalogue matching. Thanks to catalogue similarity comparison, the system is capable of handling previously uncatalogued individuals and flagging them for investigation. We evaluate the system against multiple real-life photo-id catalogues, achieving mAP@IOU[0.5] = 0.91 and 0.96 for the task of dorsal fin detection on catalogues from Tanzania and the UK respectively, and 83.1% and 97.5% top-10 accuracy for the task of individual classification on catalogues from the UK and USA.
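The final catalogue-matching stage described in the abstract above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' pipeline; the function, the similarity measure (cosine), and the threshold value are assumptions, but it shows the two behaviours described: ranking most likely matches and flagging likely-uncatalogued individuals via similarity comparison.

```python
import numpy as np

def top_k_matches(query, catalogue, k=10, new_individual_threshold=0.6):
    """query: (d,) embedding of a detected fin; catalogue: dict id -> (d,) embedding.
    Returns (top-k ranked ids, flag indicating a likely uncatalogued individual)."""
    ids = list(catalogue)
    mat = np.stack([catalogue[i] for i in ids])
    # Cosine similarity between the query and every catalogued individual.
    sims = mat @ query / (np.linalg.norm(mat, axis=1) * np.linalg.norm(query))
    order = np.argsort(-sims)                      # best match first
    ranked = [ids[i] for i in order[:k]]
    # If even the best match is weak, flag for manual investigation.
    is_new = bool(sims[order[0]] < new_individual_threshold)
    return ranked, is_new

# Toy two-individual catalogue with 2-D embeddings:
catalogue = {"id_a": np.array([1.0, 0.0]), "id_b": np.array([0.0, 1.0])}
ranked, is_new = top_k_matches(np.array([0.9, 0.1]), catalogue, k=2)
```

A query close to `id_a` ranks it first and is not flagged as new; a query dissimilar to every catalogue entry would trip the threshold instead.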
The efficiency of using the YOLOv5 machine learning model for the automatic detection and recognition of micro-objects in the marine environment is studied. Samples of microplankton and microplastics were prepared, from which a database of classified images was collected for training an image recognition neural network. The results of experiments using the trained network to find micro-objects in photo and video images in real time are presented. Experimental studies have shown that the proposed model achieves high efficiency, comparable to manual recognition, in detecting micro-objects in the marine environment.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Effective analysis of unusual domain-specific video collections represents an important practical problem, where state-of-the-art general-purpose models still face limitations. It is therefore desirable to design benchmark datasets that challenge novel powerful models for specific domains with additional constraints. It is important to remember that domain-specific data may be noisier (e.g., endoscopic or underwater videos) and often requires more experienced users for effective search. In this paper, we focus on single-shot videos taken from moving cameras in underwater environments, which constitute a nontrivial challenge for research purposes. The first shard of a new Marine Video Kit dataset is presented to serve for video retrieval and other computer vision challenges. In addition to basic metadata statistics, we present several insights and reference graphs based on low-level features as well as semantic annotations of selected keyframes. The analysis also contains experiments showing the limitations of respected general-purpose models for retrieval.
This paper reports on an investigation of the problem of rapidly identifying a channel using one or more unmanned surface vehicles (USVs). A new algorithm called Proposal-Based Adaptive Channel Search (PBAC) is presented as a potential solution that improves upon current methods. The empirical performance of PBAC is compared to lawnmower surveying and Markov decision process (MDP) planning with two state-of-the-art reward functions: upper confidence bound (UCB) and maximum value information (MVI). The performance of each method is evaluated by comparing the time taken to identify a continuous channel using one, two, three, or four USVs. The performance of each method is compared across ten simulated bathymetry scenarios and one field area, each with a different channel layout. Results from simulations and field trials show that multi-vehicle PBAC outperforms the lawnmower, UCB-based, and MVI-based methods on average, especially when using at least three vehicles.
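The UCB reward function mentioned in the abstract above can be illustrated with the standard UCB1 form, shown here for choosing which survey cell a vehicle should visit next. This is a minimal sketch under the assumption of the classical formula; the paper's exact reward and state representation may differ.

```python
import math

def ucb_score(mean_reward, visits, total_visits, c=1.4):
    """Upper confidence bound: favours cells with a high estimated reward
    (exploitation) or few visits (exploration)."""
    if visits == 0:
        return float("inf")  # always try unvisited cells first
    return mean_reward + c * math.sqrt(math.log(total_visits) / visits)

# Pick the next cell to survey among three candidates (mean reward, visit count):
cells = {"A": (0.8, 10), "B": (0.5, 2), "C": (0.0, 0)}
total = sum(visits for _, visits in cells.values())
best = max(cells, key=lambda name: ucb_score(*cells[name], total))
```

With these numbers the unvisited cell "C" wins outright, and among the visited cells the rarely sampled "B" scores above the well-explored "A" despite its lower mean, which is the exploration behaviour that adaptive channel search relies on.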
Global teams frequently consist of language-based subgroups, which pool complementary information to achieve common goals. Previous research has outlined a two-step work communication flow in these teams: there are team meetings using the required common language (i.e., English), and in preparation for these meetings, people hold subgroup conversations in their native languages. Work communication at team meetings is often less effective than in subgroup conversations. In the current study, we investigate the idea of leveraging machine translation (MT) to facilitate global team meetings. We hypothesize that exchanging subgroup conversation logs before a team meeting offers contextual information that benefits teamwork at the meeting. MT can translate these logs so that they can be comprehended at a low cost. To test our hypothesis, we conducted a between-subjects experiment in which 20 participants, working in quartets, performed a personnel selection task. Each quartet included two native English speakers (NS) and two non-native speakers (NNS) whose native language was Mandarin. All participants began the task with subgroup conversations in their native languages, then proceeded to team meetings in English. We manipulated the exchange of subgroup conversation logs prior to team meetings: MT-mediated exchange versus none. Analysis of participants' subjective experience, task performance, and depth of discussion, as reflected in their conversational moves, revealed that team meeting quality improved with MT-mediated exchange of subgroup conversation logs, as opposed to no exchange. We conclude with reflections on when and how MT could be applied to enhance global teamwork across language barriers.
Marine debris originating from human activity has been accumulating for decades in underwater environments such as oceans, lakes, and rivers. The extent, type, and amount of waste are difficult to assess because the exact mechanisms driving its dispersal are not understood, with unknown consequences for the marine environment and human health. Methods for detecting and mapping marine debris are therefore vital to gain insight into pollution dynamics, which in turn can be used to effectively plan and execute physical removal. Using an autonomous underwater vehicle (AUV) equipped with an underwater hyperspectral imager (UHI) and a stereo camera, marine debris was autonomously detected, mapped, and quantified in the sheltered bay Store Lungegaardsvann in Bergen, Norway.